Air Learning: An AI Research Platform for Algorithm-Hardware Benchmarking of Autonomous Aerial Robots
We introduce Air Learning, an open-source simulator and gym environment
for deep reinforcement learning research on resource-constrained aerial robots.
Equipped with domain randomization, Air Learning exposes a UAV agent to a
diverse set of challenging scenarios. We seed the toolset with point-to-point
obstacle avoidance tasks in three different environments and Deep Q Networks
(DQN) and Proximal Policy Optimization (PPO) trainers. Air Learning assesses
the policies' performance under various quality-of-flight (QoF) metrics, such
as the energy consumed, endurance, and the average trajectory length, on
resource-constrained embedded platforms like a Raspberry Pi. We find that the
trajectories flown on the embedded Raspberry Pi differ markedly from those
predicted on a high-end desktop system, resulting in substantially longer
trajectories in one of the environments. To understand the source of such discrepancies, we use Air
Learning to artificially degrade high-end desktop performance to mimic what
happens on a low-end embedded system. We then propose a mitigation technique
that uses the hardware-in-the-loop to determine the latency distribution of
running the policy on the target platform (onboard compute on aerial robot). A
randomly sampled latency from the latency distribution is then added as an
artificial delay within the training loop. Training the policy with artificial
delays allows us to minimize the hardware gap (discrepancy in the flight time
metric reduced from 37.73% to 0.5%). Thus, Air Learning with
hardware-in-the-loop characterizes those differences and exposes how the
onboard compute's choice affects the aerial robot's performance. We also
conduct reliability studies to assess the effect of sensor failures on the
learned policies. All put together, \airl enables a broad class of deep RL
research on UAVs. The source code is available
at:~\texttt{\url{http://bit.ly/2JNAVb6}}.Comment: To Appear in Springer Machine Learning Journal (Special Issue on
Reinforcement Learning for Real Life
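The hardware-in-the-loop mitigation described above can be sketched as a small training-loop fragment. This is a minimal illustration, not the Air Learning implementation: the latency samples, the `policy`, and the `env_step` callable are all hypothetical stand-ins.

```python
import random
import time

# Hypothetical latency samples (in seconds), standing in for measurements
# collected by running the policy on the target embedded board.
measured_latencies = [0.021, 0.034, 0.028, 0.045, 0.030]

def sample_artificial_delay(latencies):
    """Draw one latency from the measured distribution."""
    return random.choice(latencies)

def run_episode(env_step, policy, obs, horizon=10):
    """Toy loop fragment: before each action, sleep for a sampled delay
    so training experiences the target platform's inference timing."""
    total_reward = 0.0
    for _ in range(horizon):
        time.sleep(sample_artificial_delay(measured_latencies))  # artificial delay
        action = policy(obs)
        obs, reward = env_step(action)
        total_reward += reward
    return total_reward
```

Because the delay is sampled per step rather than fixed, the trained policy sees the full latency distribution of the onboard compute, not just its mean.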
One Size Does Not Fit All: Quantifying and Exposing the Accuracy-Latency Trade-off in Machine Learning Cloud Service APIs via Tolerance Tiers
Today's cloud service architectures follow a "one size fits all" deployment
strategy where the same service version instantiation is provided to the end
users. However, the consumer base is broad, and different applications have
different accuracy and responsiveness requirements, which, as we demonstrate,
renders the "one size fits all" approach inefficient in practice. We use a
production-grade speech recognition engine that serves thousands of users, and
an open-source computer-vision-based system, to illustrate this point. To
overcome the limitations of the "one size fits all" approach, we recommend
Tolerance Tiers,
where each MLaaS tier exposes an accuracy/responsiveness characteristic, and
consumers can programmatically select a tier. We evaluate our proposal on the
CPU-based automatic speech recognition (ASR) engine and cutting-edge neural
networks for image classification deployed on both CPUs and GPUs. The results
show that our proposed approach provides an MLaaS cloud service architecture
that can be tuned by the end API user or consumer to outperform the
conventional "one size fits all" approach.
Comment: 2019 IEEE International Symposium on Performance Analysis of Systems
and Software (ISPASS).
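Programmatic tier selection could look like the following sketch. The tier names, model identifiers, and accuracy/latency numbers are illustrative assumptions, not figures from the paper.

```python
# Hypothetical tier table: each MLaaS tier exposes an
# accuracy/responsiveness characteristic the consumer can select.
TIERS = {
    "fast":     {"model": "small_cnn",  "rel_accuracy": 0.92, "p99_ms": 15},
    "balanced": {"model": "medium_cnn", "rel_accuracy": 0.97, "p99_ms": 40},
    "accurate": {"model": "large_cnn",  "rel_accuracy": 1.00, "p99_ms": 120},
}

def select_tier(max_latency_ms):
    """Pick the most accurate tier whose tail latency fits the budget."""
    feasible = [t for t in TIERS.values() if t["p99_ms"] <= max_latency_ms]
    if not feasible:
        raise ValueError("no tier meets the latency budget")
    return max(feasible, key=lambda t: t["rel_accuracy"])
```

A consumer with a 50 ms budget would get the "balanced" model, while a batch job with no tight deadline would get the most accurate one; the same service thus serves both without a "one size fits all" compromise.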
Dimetrodon: Processor-level Preventive Thermal Management via Idle Cycle Injection
Processor-level dynamic thermal management techniques have long targeted worst-case thermal margins. We examine the thermal-performance trade-offs of average-case, preventive thermal management, which actively degrades application performance to achieve long-term thermal control. We propose Dimetrodon, the use of idle cycle injection, a flexible, per-thread technique, as a preventive thermal management mechanism, and demonstrate its efficiency relative to hardware techniques in a commodity operating system on real hardware under throughput- and latency-sensitive real-world workloads. Compared to inflexible hardware techniques, Dimetrodon achieves favorable trade-offs for temperature reductions of up to 30% due to rapid heat dissipation during short idle intervals.
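The idle cycle injection idea can be sketched as a duty-cycle loop: after each work quantum, the thread sleeps for a proportional idle interval. This is a toy user-space analogue under assumed parameters, not Dimetrodon's kernel-level mechanism.

```python
import time

def throttled_run(work_fn, iterations, idle_fraction):
    """Toy sketch of idle cycle injection: after each work quantum,
    sleep long enough that idle time is `idle_fraction` of the total,
    trading throughput for heat dissipation during the idle intervals."""
    for _ in range(iterations):
        start = time.perf_counter()
        work_fn()                       # the application's work quantum
        busy = time.perf_counter() - start
        if idle_fraction > 0:
            # idle / (busy + idle) == idle_fraction
            time.sleep(busy * idle_fraction / (1 - idle_fraction))
```

Because the injection is per thread and tunable, latency-critical threads can run with a small `idle_fraction` while batch threads absorb more idle time, which is the flexibility the abstract contrasts against hardware techniques.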
Exceeding Conservative Limits: A Consolidated Analysis on Modern Hardware Margins
Modern large-scale computing systems (data centers, supercomputers, cloud and
edge setups and high-end cyber-physical systems) employ heterogeneous
architectures that consist of multicore CPUs, general-purpose many-core GPUs,
and programmable FPGAs. The effective utilization of these architectures poses
several challenges, among which a primary one is power consumption. Voltage
reduction is one of the most efficient methods to reduce power consumption of a
chip. With the rapid adoption of hardware accelerators (i.e., GPUs and
FPGAs) in large datacenters and other large-scale computing infrastructures, a
comprehensive evaluation of the safe voltage reduction levels for each chip can
enable efficient reduction of the total power. We
present a survey of recent studies in voltage margins reduction at the system
level for modern CPUs, GPUs and FPGAs. The pessimistic voltage guardbands
inserted by the silicon vendors can be exploited in all devices for significant
power savings. On average, voltage reduction can reach 12% in multicore CPUs,
20% in manycore GPUs, and 39% in FPGAs.
Comment: Accepted for publication in IEEE Transactions on Device and Materials
Reliability.
Measuring Code Optimization Impact on Voltage Noise
In this paper, we characterize the impact of compiler optimizations on voltage noise. While intuition may suggest that the better processor utilization ensured by optimizing compilers results in a small amount of voltage variation, our measurements on an Intel® Core™2 Duo processor show the opposite: the majority of SPEC 2006 benchmarks exhibit more voltage droops when aggressively optimized. We show that this increase in noise could be sufficient to cause a net performance decrease in a typical-case, resilient design.